
    Dynamic robust duality in utility maximization

    A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities: (i) the optimal terminal wealth $X^*(T) := X_{\varphi^*}(T)$ of the problem of maximizing the expected $U$-utility of the terminal wealth $X_{\varphi}(T)$ generated by admissible portfolios $\varphi(t)$, $0 \leq t \leq T$, in a market with the risky asset price process modeled as a semimartingale; (ii) the optimal scenario $\frac{dQ^*}{dP}$ of the dual problem of minimizing the expected $V$-value of $\frac{dQ}{dP}$ over a family of equivalent local martingale measures $Q$, where $V$ is the convex conjugate function of the concave function $U$. In this paper we consider markets modeled by It\^o-L\'evy processes. In the first part we use the maximum principle in stochastic control theory to extend the above relation to a \emph{dynamic} relation, valid for all $t \in [0,T]$. In particular, we prove that the optimal adjoint process for the primal problem coincides with the optimal density process, and that the optimal adjoint process for the dual problem coincides with the optimal wealth process, $0 \leq t \leq T$. In the terminal time case $t = T$ we recover the classical duality connection above. Moreover, we obtain an explicit relation between the optimal portfolio $\varphi^*$ and the optimal measure $Q^*$. We also show that the existence of an optimal scenario is equivalent to the replicability of a related $T$-claim. In the second part we present robust (model uncertainty) versions of the optimization problems in (i) and (ii), and we prove a similar dynamic relation between them. In particular, we show how to pass from the solution of one of the problems to that of the other. We illustrate the results with explicit examples.
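    For orientation, a minimal sketch of the classical terminal-time duality relation alluded to above, under standard assumptions on $U$ (strictly concave, increasing, differentiable, satisfying the Inada conditions); the multiplier notation $y^*$ is ours, not the paper's:
    \begin{equation*}
    V(y) := \sup_{x>0}\left[U(x) - xy\right], \qquad U'\bigl(X^*(T)\bigr) = y^*\,\frac{dQ^*}{dP}, \quad\text{i.e.}\quad X^*(T) = (U')^{-1}\!\left(y^*\,\frac{dQ^*}{dP}\right),
    \end{equation*}
    where $y^* > 0$ is the Lagrange multiplier attached to the budget constraint. The dynamic extension studied in the paper replaces this single terminal-time identity with a relation holding at every $t \in [0,T]$.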

    A stochastic HJB equation for optimal control of forward-backward SDEs

    We study optimal stochastic control problems for general coupled systems of forward-backward stochastic differential equations with jumps. By means of the It\^o-Ventzell formula, the system is transformed into a controlled backward stochastic partial differential equation (BSPDE) with jumps. Using a comparison principle for such BSPDEs, we obtain a general stochastic Hamilton-Jacobi-Bellman (HJB) equation for such control problems. In the classical Markovian case of optimal control of jump diffusions, the equation reduces to the classical HJB equation. The results are applied to study risk minimization in financial markets.
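    For reference, one common form of the classical HJB equation that the stochastic version reduces to, written here for a one-dimensional controlled jump diffusion with coefficients $b, \sigma, \gamma$, running reward $f$, terminal reward $g$ and L\'evy measure $\nu$ (this particular formulation is our assumption, not taken from the paper):
    \begin{equation*}
    \frac{\partial \Phi}{\partial t} + \sup_{u}\Bigl\{ f(x,u) + b(x,u)\frac{\partial \Phi}{\partial x} + \tfrac{1}{2}\sigma^2(x,u)\frac{\partial^2 \Phi}{\partial x^2} + \int_{\mathbb{R}} \Bigl[\Phi(t, x + \gamma(x,u,\zeta)) - \Phi(t,x) - \gamma(x,u,\zeta)\frac{\partial \Phi}{\partial x}(t,x)\Bigr]\nu(d\zeta)\Bigr\} = 0,
    \end{equation*}
    with terminal condition $\Phi(T,x) = g(x)$. Roughly speaking, in the stochastic HJB equation the deterministic value function $\Phi$ becomes a random field satisfying a BSPDE.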

    A comparison theorem for backward SPDEs with jumps

    In this paper we obtain a comparison theorem for backward stochastic partial differential equations (SPDEs) with jumps. We apply it to introduce space-dependent convex risk measures as a model for risk in large systems of interacting components.
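    Schematically, a comparison theorem of this type states that the solution is monotone in the data: if $u^1$ and $u^2$ solve backward SPDEs with terminal values $\xi^1 \leq \xi^2$ and drivers $f^1 \leq f^2$ (the exact hypotheses are in the paper; the notation here is ours), then
    \begin{equation*}
    u^1(t,x) \leq u^2(t,x) \quad \text{for all } t \in [0,T] \text{ and all } x, \quad \text{a.s.}
    \end{equation*}
    Monotonicity of this kind is one reason comparison results are useful when defining risk measures via backward equations.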

    Singular mean-field control games with applications to optimal harvesting and investment problems

    This paper studies singular mean-field control problems and singular mean-field stochastic differential games. Both sufficient and necessary conditions for the optimal controls and for the Nash equilibria are obtained. Under some assumptions the optimality conditions for singular mean-field control reduce to a reflected Skorohod problem, whose solution is shown to exist and to be unique. Applications are given to optimal harvesting of stochastic mean-field systems, to optimal irreversible investments under uncertainty, and to mean-field singular investment games. In particular, a simple singular mean-field investment game is studied in which a Nash equilibrium exists but is not unique.
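    For orientation, a one-dimensional Skorohod reflection problem of the kind referred to here asks, given a driving process $Z$ and an initial point $x \geq 0$, for a pair $(X, L)$ such that (our schematic formulation)
    \begin{equation*}
    X(t) = x + Z(t) + L(t) \geq 0, \qquad L \text{ nondecreasing},\ L(0) = 0, \qquad \int_0^T X(t)\, dL(t) = 0,
    \end{equation*}
    so that the reflecting term $L$ increases only when $X$ sits at the boundary. In singular control problems the singular control typically plays the role of $L$.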

    Generalized Dynkin Games and Doubly Reflected BSDEs with Jumps

    We introduce a generalized Dynkin game problem with nonlinear conditional expectation ${\cal E}$ induced by a Backward Stochastic Differential Equation (BSDE) with jumps. Let $\xi, \zeta$ be two RCLL adapted processes with $\xi \leq \zeta$. The criterion is given by \begin{equation*} {\cal J}_{\tau, \sigma}= {\cal E}_{0, \tau \wedge \sigma } \left(\xi_{\tau}\textbf{1}_{\{ \tau \leq \sigma\}}+\zeta_{\sigma}\textbf{1}_{\{\sigma<\tau\}}\right) \end{equation*} where $\tau$ and $\sigma$ are stopping times valued in $[0,T]$. Under Mokobodski's condition, we establish the existence of a value function for this game, i.e. $\inf_{\sigma}\sup_{\tau} {\cal J}_{\tau, \sigma} = \sup_{\tau} \inf_{\sigma} {\cal J}_{\tau, \sigma}$. This value can be characterized via a doubly reflected BSDE. Using this characterization, we provide some new results on these equations, such as comparison theorems and a priori estimates. When $\xi$ and $\zeta$ are left upper semicontinuous along stopping times, we prove the existence of a saddle point. We also study a generalized mixed game problem where the players have two actions: continuous control and stopping. We then address the generalized Dynkin game in a Markovian framework and its links with parabolic partial integro-differential variational inequalities with two obstacles.
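    To indicate the shape of the characterization, a doubly reflected BSDE with jumps associated with the barriers $\xi \leq \zeta$ looks schematically as follows (the precise formulation, including the minimality conditions on the reflecting processes $A, A'$, is in the paper; the display below is our sketch, with $e$ the jump variable):
    \begin{equation*}
    Y_t = \xi_T + \int_t^T f(s, Y_s, Z_s, k_s)\, ds + (A_T - A_t) - (A'_T - A'_t) - \int_t^T Z_s\, dB_s - \int_t^T \!\!\int k_s(e)\, \tilde{N}(ds, de),
    \end{equation*}
    with $\xi_t \leq Y_t \leq \zeta_t$ for all $t$, where $A$ pushes $Y$ up off the lower barrier $\xi$ and $A'$ pushes it down off the upper barrier $\zeta$. The value of the game is then given by $Y_0$.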

    Mixed generalized Dynkin game and stochastic control in a Markovian framework

    We introduce a mixed {\em generalized} Dynkin game/stochastic control problem with ${\cal E}^f$-expectation in a Markovian framework. We study both the case when the terminal reward function is only supposed to be Borelian and the case when it is continuous. We first establish a weak dynamic programming principle by using some refined results recently provided in \cite{DQS} and some properties of doubly reflected BSDEs with jumps (DRBSDEs). We then show a stronger dynamic programming principle in the continuous case, which cannot be derived from the weak one. In particular, we have to prove that the value function of the problem is continuous with respect to time $t$, which requires some technical tools of stochastic analysis and some new results on DRBSDEs. We finally study the links between our mixed problem and generalized Hamilton-Jacobi-Bellman variational inequalities in both cases.
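    Schematically, and with entirely hypothetical notation on our part ($X^{t,x,\alpha}$ the controlled state, $h$ the stopping reward, $\theta$ an intermediate stopping time), a dynamic programming principle for such a mixed problem takes the form
    \begin{equation*}
    u(t,x) = \sup_{\alpha, \tau} {\cal E}^f_{t, \tau \wedge \theta}\Bigl[ u\bigl(\theta, X^{t,x,\alpha}_{\theta}\bigr)\textbf{1}_{\{\theta \leq \tau\}} + h\bigl(\tau, X^{t,x,\alpha}_{\tau}\bigr)\textbf{1}_{\{\tau < \theta\}} \Bigr];
    \end{equation*}
    roughly, the weak version yields the corresponding inequalities with $u$ on the right-hand side replaced by its upper and lower semicontinuous envelopes.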

    A Weak Dynamic Programming Principle for Combined Optimal Stopping and Stochastic Control with $\mathcal{E}^f$-expectations

    We study a combined optimal control/stopping problem under a nonlinear expectation ${\cal E}^f$ induced by a BSDE with jumps, in a Markovian framework. The terminal reward function is only supposed to be Borelian. The value function $u$ associated with this problem is generally irregular. We first establish a {\em sub- (resp. super-) optimality principle of dynamic programming} involving its {\em upper- (resp. lower-) semicontinuous envelope} $u^*$ (resp. $u_*$). This result, called a {\em weak} dynamic programming principle (DPP), extends the one obtained in \cite{BT} in the case of a classical expectation to the case of an ${\cal E}^f$-expectation and a Borelian terminal reward function. Using this {\em weak} DPP, we then prove that $u^*$ (resp. $u_*$) is a {\em viscosity sub- (resp. super-) solution} of a nonlinear Hamilton-Jacobi-Bellman variational inequality.
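    For orientation, a Hamilton-Jacobi-Bellman variational inequality of the kind arising here takes, schematically, the obstacle form (our notation: $h$ the stopping reward, $L^{\alpha}$ the generator of the controlled jump diffusion, $f$ the BSDE driver)
    \begin{equation*}
    \min\Bigl( u(t,x) - h(t,x),\; -\frac{\partial u}{\partial t}(t,x) - \sup_{\alpha}\bigl[ L^{\alpha} u(t,x) + f\bigl(t, x, u(t,x), \ldots\bigr) \bigr] \Bigr) = 0,
    \end{equation*}
    whose sub- and supersolutions are understood in the viscosity sense precisely because $u$ itself may fail to be continuous.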